The version has to be the same from end to end for all of ELK; here I use 7.12.0.
In principle this is just adding a big pile of YAML files.
You should be able to just dump in all the YAML files in one go; the wiring between them is already in place. Nothing here touches log shipping yet, so in theory this part looks the same for everyone, a sort of shared version-baseline article(?). I feel like an article like this ought to exist; maybe I just never found one.
Personally I think that with the content of this post, standing up a most basic ELK on k8s becomes a whole lot easier.
If you feed in everything above as written and still hit any error, please report it to me so I have a chance to fix it; I'll try to reply as soon as I see it, thanks.
There are surely plenty of articles that can stand the whole thing up for you directly, but building it piece by piece like this should make it easier to understand what each part is doing.
Add these three YAML files:
1. the Elasticsearch config file
2. Deployment
3. Service
We start with a Deployment because it is easier to stand up and easier to understand; once it works you can convert it to a StatefulSet.
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: elasticsearch-config-yc
data:
  elasticsearch.yml: |
    cluster.name: "docker-cluster"
    network.host: 0.0.0.0
    xpack.license.self_generated.type: trial
    xpack.monitoring.collection.enabled: true
```
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: yc-elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yc-elasticsearch
  template:
    metadata:
      labels:
        app: yc-elasticsearch
    spec:
      volumes:
        - name: config
          configMap:
            name: elasticsearch-config-yc
            defaultMode: 420
      initContainers:
        - name: increase-vm-max-map
          image: busybox
          command:
            - sysctl
            - '-w'
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: yc-elasticsearch
          image: 'docker.elastic.co/elasticsearch/elasticsearch:7.12.0'
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          env:
            - name: ES_JAVA_OPTS
              value: '-Xms512m -Xmx512m'
            - name: discovery.type
              value: single-node
          volumeMounts:
            - name: config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
```
```yaml
kind: Service
apiVersion: v1
metadata:
  name: yc-elasticsearch
spec:
  ports:
    - name: yc-elasticsearch
      protocol: TCP
      port: 80
      targetPort: 9200
  selector:
    app: yc-elasticsearch
  type: ClusterIP
  sessionAffinity: None
```
Now you can `curl` the Service IP; if the response contains `"tagline" : "You Know, for Search"`, it's working.
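As a sketch of that check: the in-cluster curl is shown as a comment (it needs a running cluster), and the JSON below is an abridged sample of what a healthy 7.12.0 node returns, with a made-up pod name.

```shell
# In-cluster check (run from any pod that has curl):
#   curl -s http://yc-elasticsearch.default.svc.cluster.local:80
# Abridged sample of the JSON a healthy node returns (name is made up):
response='{
  "name" : "yc-elasticsearch-6c9d8f7b9-abcde",
  "cluster_name" : "docker-cluster",
  "version" : { "number" : "7.12.0" },
  "tagline" : "You Know, for Search"
}'

# The check is simply: does the response contain the tagline?
echo "$response" | grep -c 'You Know, for Search'
```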
Elasticsearch doesn't depend on the other components, which is why ELK write-ups usually start from the E.
Next up: Logstash.
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: logstash-config-yc
  namespace: default
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    xpack.monitoring.elasticsearch.hosts: [ "http://yc-elasticsearch.default.svc.cluster.local:80" ]
```
Later, when it's time to ship logs, this next part is what you change.
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: logstash-pipelines-yc
data:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["http://yc-elasticsearch.default.svc.cluster.local:80"]
        index => "log_test"
      }
    }
```
When it's time to ship logs, you may also need to add a few volumes here.
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: yc-logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yc-logstash
  template:
    metadata:
      labels:
        app: yc-logstash
    spec:
      volumes:
        - name: config
          configMap:
            name: logstash-config-yc
            defaultMode: 420
        - name: pipelines
          configMap:
            name: logstash-pipelines-yc
            defaultMode: 420
      containers:
        - name: yc-logstash
          image: 'docker.elastic.co/logstash/logstash:7.12.0'
          ports:
            - containerPort: 5044
              protocol: TCP
            - containerPort: 5000
              protocol: TCP
            - containerPort: 5000
              protocol: UDP
            - containerPort: 9600
              protocol: TCP
          env:
            - name: ELASTICSEARCH_HOST
              value: 'http://yc-elasticsearch.default.svc.cluster.local'
            - name: LS_JAVA_OPTS
              value: '-Xms512m -Xmx512m'
          volumeMounts:
            - name: pipelines
              mountPath: /usr/share/logstash/pipeline
            - name: config
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
```
```yaml
kind: Service
apiVersion: v1
metadata:
  name: yc-logstash
spec:
  ports:
    - name: logstash
      protocol: TCP
      port: 80
      targetPort: 9600
    - name: filebeat
      protocol: TCP
      port: 5044
      targetPort: 5044
  selector:
    app: yc-logstash
  type: ClusterIP
  sessionAffinity: None
```
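This Service maps port 80 to 9600, Logstash's monitoring API, so you can health-check it much like Elasticsearch. A minimal sketch, with the in-cluster curl as a comment and an abridged, made-up sample of the node info Logstash serves on 9600 (I believe a healthy node reports `"status" : "green"`, but treat the exact field layout as an assumption):

```shell
# In-cluster check:
#   curl -s http://yc-logstash.default.svc.cluster.local:80
# Abridged sample of the node info from the 9600 monitoring API:
response='{
  "host" : "yc-logstash-5b7c9d4f6-vwxyz",
  "version" : "7.12.0",
  "http_address" : "0.0.0.0:9600",
  "status" : "green"
}'

# "green" indicates the Logstash process is up.
echo "$response" | grep -c '"status" : "green"'
```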
Next up is Kibana:
1. the config file at /usr/share/kibana/config/kibana.yml
2. Deployment
3. Service
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kibana-config-yc
data:
  kibana.yml: |
    server.name: kibana
    server.host: 0.0.0.0
    elasticsearch.hosts: [ "http://yc-elasticsearch.default.svc.cluster.local:80" ]
    monitoring.ui.container.elasticsearch.enabled: true
```
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: yc-kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: yc-kibana
  template:
    metadata:
      labels:
        component: yc-kibana
    spec:
      volumes:
        - name: config
          configMap:
            name: kibana-config-yc
            defaultMode: 420
      containers:
        - name: elk-kibana
          image: 'docker.elastic.co/kibana/kibana:7.12.0'
          ports:
            - name: yc-kibana
              containerPort: 5601
              protocol: TCP
          volumeMounts:
            - name: config
              mountPath: /usr/share/kibana/config/kibana.yml
              subPath: kibana.yml
```
```yaml
kind: Service
apiVersion: v1
metadata:
  name: yc-kibana
spec:
  ports:
    - name: yc-kibana
      protocol: TCP
      port: 80
      targetPort: 5601
  selector:
    component: yc-kibana
  type: LoadBalancer
```
At this point ELK is up; what's left is getting logs into it.
I'll introduce two ways to feed in logs.
One is to declare storage: when you deploy a service, write its logs into that storage through volumes, then read them from there
(add a matching volume on the Logstash side and you have a simple way to test).
The other is Filebeat; that's for the next post.
This should be quite helpful.
Our approach for testing volumes: create a persistent disk in the storage account and hook it up through file sharing.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: log-azurefile
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  volumeName: log-azurefile
  storageClassName: ''
  volumeMode: Filesystem
```
```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: log-azurefile
spec:
  capacity:
    storage: 2Gi
  azureFile:
    secretName: elk-secret
    shareName: yc/logs
    secretNamespace: null
  accessModes:
    - ReadWriteMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: log-azurefile
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
  volumeMode: Filesystem
```
Then we need a Secret that can connect to the file share.
```yaml
kind: Secret
apiVersion: v1
metadata:
  name: elk-secret
  namespace: default
data:
  azurestorageaccountkey: xxxxxxx
  azurestorageaccountname: xxxxxxxxx
type: Opaque
```
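One thing to watch: the `data` values of a Secret must be base64-encoded, so the `xxxxxxx` placeholders above stand for encoded versions of your real Azure storage account name and key. A quick sketch of producing them ("mystorageaccount" is a made-up example value):

```shell
# Secret `data` values are base64; encode each real value like this
# ("mystorageaccount" is a placeholder, not a real account):
printf '%s' "mystorageaccount" | base64

# Or skip manual encoding entirely and let kubectl build the Secret:
#   kubectl create secret generic elk-secret \
#     --from-literal=azurestorageaccountname=<account-name> \
#     --from-literal=azurestorageaccountkey=<account-key>
```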
With that, log-azurefile can reach yc/logs on the file share.
Then we go back to the Logstash Deployment and add

```yaml
volumes:
  - name: volume-log
    persistentVolumeClaim:
      claimName: log-azurefile
```

and

```yaml
volumeMounts:
  - name: volume-log
    mountPath: /usr/local/tomcat/logs
```

Now /usr/local/tomcat/logs inside the Logstash deployment and yc/logs on the file share are wired together:
create a file in yc/logs and the same file appears under /usr/local/tomcat/logs in Logstash, and vice versa.
In logstash-pipelines, add a `file` input:

```conf
input {
  beats {
    port => 5044
  }
  file {
    path => "/usr/local/tomcat/logs/*.log"
  }
}
```
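One caveat about the `file` input, based on my understanding of its defaults: it tails files from the end and remembers its read position in a sincedb file, so content that already exists in a log may never be picked up. For a quick test you can force it to re-read from the start, roughly like this:

```conf
file {
  path => "/usr/local/tomcat/logs/*.log"
  # read existing file content instead of only tailing new lines
  start_position => "beginning"
  # don't persist read offsets, so restarts re-read everything (test only)
  sincedb_path => "/dev/null"
}
```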
First run kubectl exec -it logstash-xxxx -- bash
cd /usr/local/tomcat/logs/
and check whether the files are coming through.
Then create a .log file there and watch whether anything shows up in Kibana.
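Putting that final check together as a sketch: the pod name and the in-cluster commands are comments since they depend on your cluster; the runnable part just builds a distinctive line you can search for later.

```shell
# Generate a log line we can recognize later in Kibana:
line="elk-test $(date +%Y-%m-%dT%H:%M:%S)"
echo "$line"

# Inside the logstash pod, append it to the shared log directory:
#   kubectl exec -it <logstash-pod> -- \
#     sh -c "echo '$line' >> /usr/local/tomcat/logs/test.log"
# You can also query the index directly instead of going through Kibana:
#   curl -s "http://yc-elasticsearch.default.svc.cluster.local:80/log_test/_search?q=elk-test"
```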